Interpretable entity representations (IERs) are sparse embeddings that are "human-readable" in that dimensions correspond to fine-grained entity types and values are predicted probabilities that a given entity is of the corresponding type. These methods perform well in zero-shot and low-supervision settings. Compared to standard dense neural embeddings, such interpretable representations may permit analysis and debugging. However, while fine-tuning sparse, interpretable representations improves accuracy on downstream tasks, it destroys the semantics of the dimensions which were enforced in pre-training. Can we maintain the interpretable semantics afforded by IERs while improving predictive performance on downstream tasks? Toward this end, we propose Intermediate enTity-based Sparse Interpretable Representation Learning (ItsIRL). ItsIRL realizes improved performance over prior IERs on biomedical tasks, while maintaining "interpretability" generally and the ability to support model debugging specifically. The latter is enabled in part by the ability to perform "counterfactual" fine-grained entity type manipulation, which we explore in this work. Finally, we propose a method to construct entity-type-based class prototypes for revealing global semantic properties of classes learned by our model.
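As a minimal sketch of the general idea (not the paper's model or data), an interpretable entity representation can be viewed as a vector of per-type probabilities that can be inspected directly; the type vocabulary and logits below are made up for illustration.

```python
import numpy as np

# Sketch of an interpretable entity representation (IER): each dimension
# corresponds to a fine-grained entity type and each value is the predicted
# probability that the entity has that type. Vocabulary and scores are
# illustrative placeholders, not taken from the paper.
TYPE_VOCAB = ["disease", "protein", "drug", "gene", "symptom", "procedure"]

def to_interpretable_embedding(type_logits):
    """Map raw per-type logits to probabilities with an element-wise sigmoid."""
    return 1.0 / (1.0 + np.exp(-np.asarray(type_logits, dtype=float)))

def top_types(embedding, k=3):
    """Return the k most probable types -- useful for inspection and debugging."""
    order = np.argsort(embedding)[::-1][:k]
    return [(TYPE_VOCAB[i], float(embedding[i])) for i in order]

# Hypothetical logits for an entity mention such as "aspirin".
emb = to_interpretable_embedding([-3.1, -2.0, 4.2, -1.5, -2.8, -0.7])
print(top_types(emb))  # e.g. [('drug', 0.98...), ('procedure', 0.33...), ...]
```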
Multi-task learning is useful in NLP because it is often desirable to have a single model that performs well across a range of tasks. In the medical domain, sequential training on tasks may sometimes be the only way to train models, either because access to the original (potentially sensitive) data is no longer possible, or simply because of the computational costs inherent to joint retraining. A major issue inherent to sequential learning, however, is catastrophic forgetting, i.e., a substantial drop in accuracy on prior tasks when the model is updated for a new task. Elastic Weight Consolidation is a recently proposed method to address this problem, but scaling this approach to the modern, large models used in practice requires strong independence assumptions over the model parameters, limiting its effectiveness. In this work, we apply Kronecker factorization, a recent approach that relaxes the independence assumption, to prevent catastrophic forgetting in convolutional and Transformer neural networks at scale. We show the effectiveness of this technique on the important and illustrative task of medical entity linking across three datasets, demonstrating its ability to efficiently update an existing model as new medical data becomes available. On average, the proposed method reduces catastrophic forgetting by 51% when using a BERT-based model, compared to a 27% reduction when using standard Elastic Weight Consolidation, while maintaining space complexity proportional to the number of model parameters.
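For intuition, the snippet below contrasts the diagonal EWC penalty with a Kronecker-factored penalty for a single linear layer. The weights, activations, and gradients are random placeholders, and the code is a conceptual sketch rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4
W_old = rng.normal(size=(d_out, d_in))                 # parameters after the previous task
W_new = W_old + 0.1 * rng.normal(size=(d_out, d_in))   # parameters while training the new task
dW = W_new - W_old
lam = 1.0                                              # regularization strength

# Standard (diagonal) EWC: the Fisher information is approximated by its diagonal.
fisher_diag = rng.uniform(size=(d_out, d_in))          # placeholder per-parameter Fisher
ewc_penalty = 0.5 * lam * np.sum(fisher_diag * dW ** 2)

# Kronecker-factored EWC: the layer's Fisher is approximated as A (x) G, where A is the
# second-moment matrix of the layer inputs and G that of the pre-activation gradients.
# The quadratic penalty vec(dW)^T (A (x) G) vec(dW) equals trace(dW^T G dW A), so the
# full Fisher never needs to be materialized.
acts = rng.normal(size=(100, d_in))                    # placeholder activations from the old task
grads = rng.normal(size=(100, d_out))                  # placeholder backpropagated gradients
A = acts.T @ acts / len(acts)
G = grads.T @ grads / len(grads)
kfac_penalty = 0.5 * lam * np.trace(dW.T @ G @ dW @ A)

print(f"diagonal EWC penalty:       {ewc_penalty:.4f}")
print(f"Kronecker-factored penalty: {kfac_penalty:.4f}")
```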
The best evidence concerning comparative treatment effectiveness comes from clinical trials, the results of which are reported in unstructured articles. Medical experts must manually extract information from these articles to inform decision-making, which is time-consuming and expensive. Here we consider the end-to-end task of (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former (relation extraction). We introduce new data for this task and evaluate models that have recently achieved state-of-the-art results on similar tasks in natural language processing. We then propose a new method, motivated by how trial results are typically presented, that outperforms these purely data-driven baselines. Finally, we report an evaluation of the model conducted with a non-profit seeking to identify existing drugs that might be re-purposed for cancer, showing the potential utility of end-to-end evidence extraction systems.
Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code to reproduce all experiments is available at https://github.com/successar/AttentionExplanation.
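To make the kind of comparison described above concrete, the toy snippet below computes the rank correlation between attention weights and a gradient-based importance measure for a random attention-weighted classifier. The model, weights, and inputs are placeholders, not the paper's experimental setup.

```python
import torch
import torch.nn.functional as F
from scipy.stats import kendalltau

torch.manual_seed(0)

# A toy attention-based classifier: score each token embedding, form a softmax
# attention distribution, and classify the attention-weighted sum.
d, n_tokens = 16, 10
W_score = torch.randn(d, 1)
W_out = torch.randn(d, 2)

tokens = torch.randn(n_tokens, d, requires_grad=True)

attn = F.softmax(tokens @ W_score, dim=0).squeeze(-1)   # (n_tokens,) attention weights
context = attn @ tokens                                  # attention-weighted sum
logits = context @ W_out

# Gradient-based importance: L2 norm of d(predicted logit)/d(token embedding).
pred = logits.argmax()
logits[pred].backward()
grad_importance = tokens.grad.norm(dim=-1)

tau, p = kendalltau(attn.detach().numpy(), grad_importance.numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f} (p={p:.3f})")
```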
Voice assistants are deployed widely and provide useful functionality. However, recent work has shown that commercial systems like Amazon Alexa and Google Home are vulnerable to voice-based confusion attacks that exploit design issues. We propose a systems-oriented defense against this class of attacks and demonstrate its functionality for Amazon Alexa. We ensure that only the skills a user intends to invoke execute in response to voice commands. Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems of the web and smartphones. For example, the Lyft ride-sharing Alexa skill has an Android app and a website. Our work shows how information from counterpart apps can help reduce ambiguity in the skill invocation process. We build SkillFence, a browser extension that existing voice assistant users can install to ensure that only legitimate skills run in response to their commands. Using real user data from MTurk (N = 116) and experimental trials involving synthetic and organic speech, we show that SkillFence provides a balance between usability and security by securing 90.83% of skills that a user will need with a false acceptance rate of 19.83%.
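To illustrate the counterpart-app insight, here is a toy sketch (not SkillFence's actual matcher) that prefers a candidate skill whose name closely matches an installed counterpart app; all skill and app names are hypothetical.

```python
import difflib

# If the user has the Lyft Android app installed, an ambiguous spoken invocation
# is resolved toward the skill whose name matches an installed counterpart app.
installed_counterpart_apps = ["Lyft", "Spotify", "Capital One"]
candidate_skills = ["Lyft", "Lift Tracker", "Lyft Line Facts"]  # homophone-prone candidates

def resolve_skill(candidates, counterpart_apps, threshold=0.8):
    """Prefer the candidate skill whose name closely matches an installed app."""
    best = (None, 0.0)
    for skill in candidates:
        for app in counterpart_apps:
            score = difflib.SequenceMatcher(None, skill.lower(), app.lower()).ratio()
            if score >= threshold and score > best[1]:
                best = (skill, score)
    return best

skill, score = resolve_skill(candidate_skills, installed_counterpart_apps)
print(skill, round(score, 2))  # -> Lyft 1.0
```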
Sampling-based Model Predictive Control (MPC) is a flexible control framework that can reason about non-smooth dynamics and cost functions. Recently, significant work has focused on the use of machine learning to improve the performance of MPC, often through learning or fine-tuning the dynamics or cost function. In contrast, we focus on learning to optimize more effectively; in other words, on improving the update rule within MPC. We show that this can be particularly useful in sampling-based MPC, where we often wish to minimize the number of samples for computational reasons. Unfortunately, the cost of computational efficiency is a reduction in performance; fewer samples result in noisier updates. We show that we can contend with this noise by learning how to update the control distribution more effectively and make better use of the few samples that we have. Our learned controllers are trained via imitation learning to mimic an expert which has access to substantially more samples. We test the efficacy of our approach on multiple simulated robotics tasks in sample-constrained regimes and demonstrate that our approach can outperform an MPC controller with the same number of samples.
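For context, the snippet below shows the kind of hand-crafted, MPPI-style update rule that such a learned update would replace, applied to a toy 1-D point mass. The dynamics, cost, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

horizon, n_samples, temperature = 15, 32, 1.0
mean_controls = np.zeros(horizon)  # current mean of the control distribution
sigma = 0.5                        # fixed exploration noise
goal = 1.0

def rollout_cost(controls):
    """Roll out simple double-integrator dynamics and return a tracking cost."""
    pos, vel, cost = 0.0, 0.0, 0.0
    for u in controls:
        vel += 0.1 * u
        pos += 0.1 * vel
        cost += (pos - goal) ** 2 + 0.01 * u ** 2
    return cost

for step in range(50):
    # Sample control sequences around the current mean and evaluate them.
    noise = rng.normal(scale=sigma, size=(n_samples, horizon))
    samples = mean_controls + noise
    costs = np.array([rollout_cost(s) for s in samples])

    # Exponentially weight low-cost samples and update the distribution mean.
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    mean_controls = weights @ samples

print(f"final cost of the mean control sequence: {rollout_cost(mean_controls):.4f}")
```

With few samples this weighted update becomes noisy, which is precisely the regime where a learned update rule is claimed to help.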
Sampling-based methods have become a cornerstone of contemporary approaches to Model Predictive Control (MPC), as they make no restrictions on the differentiability of the dynamics or cost function and are straightforward to parallelize. However, their efficacy is highly dependent on the quality of the sampling distribution itself, which is often assumed to be simple, like a Gaussian. This restriction can result in samples which are far from optimal, leading to poor performance. Recent work has explored improving the performance of MPC by sampling in a learned latent space of controls. However, these methods ultimately perform all MPC parameter updates and warm-starting between time steps in the control space. This requires us to rely on a number of heuristics for generating samples and updating the distribution and may lead to sub-optimal performance. Instead, we propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution. Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time. By using a normalizing flow parameterization of the distribution, we can leverage its tractable density to avoid requiring differentiability of the dynamics and cost function. Finally, we evaluate the proposed approach on simulated robotics tasks and demonstrate its ability to surpass the performance of prior methods and scale better with a reduced number of samples.
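The sketch below illustrates the idea of maintaining and updating the sampling distribution in a latent space and decoding samples through an invertible map. The fixed random affine "flow" and toy dynamics are stand-ins for the trained normalizing flow and tasks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

horizon, latent_dim, n_samples = 10, 4, 64
A = rng.normal(size=(horizon, latent_dim))   # placeholder invertible-style decoder: u = A z + b
b = rng.normal(size=horizon)
goal = 1.0

def decode(z):
    """Map a latent vector to a control sequence (stand-in for a normalizing flow)."""
    return A @ z + b

def rollout_cost(controls):
    pos, vel, cost = 0.0, 0.0, 0.0
    for u in controls:
        vel += 0.1 * u
        pos += 0.1 * vel
        cost += (pos - goal) ** 2 + 0.01 * u ** 2
    return cost

latent_mean = np.zeros(latent_dim)           # the distribution lives in latent space
for step in range(50):
    z_samples = latent_mean + rng.normal(scale=0.5, size=(n_samples, latent_dim))
    costs = np.array([rollout_cost(decode(z)) for z in z_samples])
    weights = np.exp(-(costs - costs.min()))
    weights /= weights.sum()
    latent_mean = weights @ z_samples        # update (and later warm-start) entirely in latent space

print(f"cost of decoded latent mean: {rollout_cost(decode(latent_mean)):.4f}")
```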
Finding an initial noise vector that produces an input image when fed into the diffusion process (known as inversion) is an important problem in denoising diffusion models (DDMs), with applications for real image editing. The state-of-the-art approach for real image editing with inversion uses denoising diffusion implicit models (DDIMs) to deterministically noise the image to the intermediate state along the path that the denoising would follow given the original conditioning. However, DDIM inversion for real images is unstable as it relies on local linearization assumptions, which result in the propagation of errors, leading to incorrect image reconstruction and loss of content. To alleviate these problems, we propose Exact Diffusion Inversion via Coupled Transformations (EDICT), an inversion method that draws inspiration from affine coupling layers. EDICT enables mathematically exact inversion of real and model-generated images by maintaining two coupled noise vectors which are used to invert each other in an alternating fashion. Using Stable Diffusion, a state-of-the-art latent diffusion model, we demonstrate that EDICT successfully reconstructs real images with high fidelity. On complex image datasets like MS-COCO, EDICT reconstruction significantly outperforms DDIM, improving the mean square error of reconstruction by a factor of two. Using noise vectors inverted from real images, EDICT enables a wide range of image edits--from local and global semantic edits to image stylization--while maintaining fidelity to the original image structure. EDICT requires no model training/finetuning, prompt tuning, or extra data and can be combined with any pretrained DDM. Code is available at https://github.com/salesforce/EDICT.
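The exactness EDICT relies on can be seen in a generic coupling construction, where each half of the state is updated using only the other half and can therefore be undone exactly. The snippet below demonstrates additive coupling, a simple special case; it is not EDICT's actual update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(v):
    """Arbitrary (non-invertible) function of the conditioning half."""
    return np.tanh(3.0 * v) + 0.5 * v ** 2

def couple(x, y):
    x_new = x + f(y)       # update x using only y
    y_new = y + f(x_new)   # then update y using only the new x
    return x_new, y_new

def uncouple(x_new, y_new):
    y = y_new - f(x_new)   # invert in reverse order, exactly
    x = x_new - f(y)
    return x, y

x, y = rng.normal(size=8), rng.normal(size=8)
x_new, y_new = couple(x, y)
x_rec, y_rec = uncouple(x_new, y_new)
print(np.allclose(x, x_rec), np.allclose(y, y_rec))  # True True
```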
Time-series shapelets are discriminative subsequences that have recently been found effective for time-series clustering (TSC). Shapelets conveniently make clusters interpretable. Thus, a major challenge in TSC is to discover high-quality, variable-length shapelets that discriminate between different clusters. In this paper, we propose a novel autoencoder-shapelet approach (AutoShape), which is the first study to take advantage of both an autoencoder and shapelets for determining shapelets in an unsupervised manner. The autoencoder is specially designed to learn high-quality shapelets. More specifically, to guide the latent representation learning, we employ a recent self-supervised loss to learn unified embeddings for variable-length shapelet candidates (time-series subsequences) of different variables, and propose a diversity loss to select discriminative embeddings in the unified space. We introduce a reconstruction loss to recover shapelets in the original time-series space for clustering. Finally, we employ the Davies-Bouldin index (DBI) to inform AutoShape of the clustering performance during learning. We present extensive experiments on AutoShape. To evaluate the clustering performance on univariate time series (UTS), we compare AutoShape with 15 representative methods using UCR archive datasets. To study the performance on multivariate time series (MTS), we evaluate AutoShape on 30 UEA archive datasets against 5 competitive methods. The results demonstrate that AutoShape is the best among all the compared methods. We interpret clusters with shapelets, and obtain interesting intuitions about the clusters in three UTS case studies and one MTS case study.
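As a rough illustration of two ingredients mentioned above, extracting candidate subsequences and monitoring clustering quality with the Davies-Bouldin index, here is a small sketch on synthetic series. The raw subsequences stand in for the learned shapelet embeddings AutoShape would produce; none of this reproduces the paper's model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)

def sliding_subsequences(series, length, stride):
    """Extract fixed-length candidate subsequences (shapelet candidates)."""
    return np.stack([series[i:i + length]
                     for i in range(0, len(series) - length + 1, stride)])

# Two synthetic univariate series with different characteristic shapes.
t = np.linspace(0, 6 * np.pi, 300)
series_a = np.sin(t) + 0.1 * rng.normal(size=t.size)
series_b = np.sign(np.sin(t)) + 0.1 * rng.normal(size=t.size)

candidates = np.vstack([sliding_subsequences(s, length=30, stride=10)
                        for s in (series_a, series_b)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(candidates)
dbi = davies_bouldin_score(candidates, labels)  # lower is better
print(f"Davies-Bouldin index of the candidate clustering: {dbi:.3f}")
```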
The oblique random survival forest (RSF) is an ensemble supervised learning method for right-censored outcomes. Trees in an oblique RSF are grown using linear combinations of predictors to create branches, whereas in a standard RSF a single predictor is used. Oblique RSF ensembles often have higher prediction accuracy than standard RSF ensembles. However, assessing all possible linear combinations of predictors induces significant computational overhead, which limits applications to large-scale datasets. In addition, few methods have been developed for interpreting oblique RSF ensembles, and they remain more difficult to interpret than their axis-based counterparts. We introduce a method to increase the computational efficiency of the oblique RSF and a method to estimate the importance of individual predictors with the oblique RSF. Our strategy for reducing computational overhead makes use of Newton-Raphson scoring, a classical optimization technique that we apply to the Cox partial likelihood function within each non-leaf node of the decision trees. We estimate the importance of individual predictors for the oblique RSF by negating each coefficient used for the given predictor in the linear combinations and then computing the resulting reduction in accuracy. In benchmarking experiments, we find that our implementation of the oblique RSF is approximately 450 times faster, with equivalent discrimination and superior Brier scores, compared to existing software for oblique RSFs. We find in simulation studies that "negation importance" discriminates between relevant and irrelevant predictors more reliably than permutation importance, Shapley additive explanations, and a previously introduced technique for measuring variable importance with oblique RSFs based on analysis of variance. The methods introduced in the current study are available in the aorsf R package.
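The negation-importance idea can be illustrated on a simple linear classifier rather than an oblique RSF: negate the coefficient a predictor contributes to the linear combination, then measure the resulting drop in accuracy. The sketch below uses logistic regression purely for illustration; aorsf applies the idea to the linear combinations inside each non-leaf node of the survival trees.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a few informative and a few irrelevant predictors.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

for j in range(X.shape[1]):
    saved = model.coef_[0, j]
    model.coef_[0, j] = -saved                 # negate the predictor's coefficient
    drop = baseline - model.score(X_te, y_te)  # accuracy reduction = negation importance
    model.coef_[0, j] = saved                  # restore the original coefficient
    print(f"feature {j}: negation importance = {drop:.3f}")
```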